
    Condorcet-Consistent and Approximately Strategyproof Tournament Rules

    We consider the manipulability of tournament rules for round-robin tournaments of n competitors. Specifically, n competitors are competing for a prize, and a tournament rule r maps the result of all \binom{n}{2} pairwise matches (called a tournament, T) to a distribution over winners. Rule r is Condorcet-consistent if whenever i wins all n-1 of her matches, r selects i with probability 1. We consider strategic manipulation of tournaments where player j might throw their match to player i in order to increase the likelihood that one of them wins the tournament. Regardless of the reason why j chooses to do this, the potential for manipulation exists as long as Pr[r(T) = i] increases by more than Pr[r(T) = j] decreases. Unfortunately, it is known that every Condorcet-consistent rule is manipulable (Altman and Kleinberg). In this work, we address the question of how manipulable Condorcet-consistent rules must necessarily be, by trying to minimize the difference between the increase in Pr[r(T) = i] and the decrease in Pr[r(T) = j] for any potential manipulating pair. We show that every Condorcet-consistent rule is in fact 1/3-manipulable, and that selecting a winner according to a random single-elimination bracket is not α-manipulable for any α > 1/3. We also show that many previously studied tournament formats are all 1/2-manipulable, and that the popular class of Copeland rules (any rule that selects a player with the most wins) is in fact 1-manipulable, the worst possible. Finally, we consider extensions to match-fixing among sets of more than two players.
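    The random single-elimination bracket is concrete enough to simulate directly. Below is a minimal Python sketch (the function names, the dictionary encoding of T, and the naive handling of byes are my own) that estimates Pr[r(T) = i] by Monte Carlo and measures the joint gain a pair {i, j} obtains when j throws their match to i, i.e., the quantity the 1/3 bound controls:

```python
import itertools
import random

def random_single_elim_winner(beats, players, rng):
    """One run of a single-elimination bracket with uniformly random seeding.
    beats[(a, b)] is True iff a beats b in the tournament T."""
    order = list(players)
    rng.shuffle(order)  # uniformly random seeding
    while len(order) > 1:
        nxt = [a if beats[(a, b)] else b
               for a, b in zip(order[::2], order[1::2])]
        if len(order) % 2 == 1:  # odd player out gets a bye this round
            nxt.append(order[-1])
        order = nxt
    return order[0]

def win_probs(beats, players, trials=20000, seed=0):
    """Monte Carlo estimate of Pr[r(T) = i] for every player i."""
    rng = random.Random(seed)
    counts = dict.fromkeys(players, 0)
    for _ in range(trials):
        counts[random_single_elim_winner(beats, players, rng)] += 1
    return {p: c / trials for p, c in counts.items()}

def joint_gain_from_throwing(beats, players, i, j):
    """(Pr[i wins] + Pr[j wins]) after j throws to i, minus the same sum before."""
    before = win_probs(beats, players)
    fixed = dict(beats)
    fixed[(i, j)], fixed[(j, i)] = True, False  # j throws the match to i
    after = win_probs(fixed, players)
    return (after[i] + after[j]) - (before[i] + before[j])

# A 4-player tournament: a beats b iff a < b, except that 3 upsets 0.
players = [0, 1, 2, 3]
beats = {}
for a, b in itertools.combinations(players, 2):
    beats[(a, b)] = (a < b) if (a, b) != (0, 3) else False
    beats[(b, a)] = not beats[(a, b)]
print(joint_gain_from_throwing(beats, players, i=3, j=2))
```

    By the paper's result, on random single-elimination brackets this gain should never exceed 1/3 (up to Monte Carlo error) on any tournament and any pair.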

    Approximately Strategyproof Tournament Rules: On Large Manipulating Sets and Cover-Consistence

    We consider the manipulability of tournament rules, in which n teams play a round-robin tournament and a winner is (possibly randomly) selected based on the outcome of all \binom{n}{2} matches. Prior work defines a tournament rule to be k-SNM-α if no set of ≤ k teams can fix the ≤ \binom{k}{2} matches among them to increase their probability of winning by > α, and asks: for each k, what is the minimum α(k) such that a Condorcet-consistent (i.e., always selects a Condorcet winner when one exists) k-SNM-α(k) tournament rule exists? A simple example witnesses that α(k) ≥ (k-1)/(2k-1) for all k, and [Jon Schneider et al., 2017] conjectures that this is tight (and proves it is tight for k=2). Our first result refutes this conjecture: there exists a sufficiently large k such that no Condorcet-consistent tournament rule is k-SNM-1/2. Our second result leverages similar machinery to design a new tournament rule which is k-SNM-2/3 for all k (this is the first tournament rule which is k-SNM-(<1) for all k). Our final result extends prior work, which proves that a single-elimination bracket with random seeding is 2-SNM-1/3 [Jon Schneider et al., 2017], in a different direction by seeking a stronger notion of fairness than Condorcet-consistence. We design a new tournament rule, which we call Randomized-King-of-the-Hill, which is 2-SNM-1/3 and cover-consistent (the winner is an uncovered team with probability 1).
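    The k-SNM-α condition translates directly into a brute-force check. The sketch below is exponential-time and only for tiny n; the names and encoding are my own, and `rule` is any function mapping a tournament to a dictionary of win probabilities (such as `win_probs` above). It computes the best gain a coalition can extract by fixing its internal matches; a rule is k-SNM-α only if this gain is ≤ α on every tournament:

```python
import itertools

def coalition_gain(rule, beats, players, S):
    """Max increase in the total win probability of coalition S obtainable
    by fixing the matches among S's own members (brute force over outcomes)."""
    base_total = sum(rule(beats, players)[s] for s in S)
    pairs = list(itertools.combinations(S, 2))
    best = 0.0
    for outcome in itertools.product([True, False], repeat=len(pairs)):
        fixed = dict(beats)
        for (a, b), a_wins in zip(pairs, outcome):
            fixed[(a, b)], fixed[(b, a)] = a_wins, not a_wins
        probs = rule(fixed, players)
        best = max(best, sum(probs[s] for s in S) - base_total)
    return best

def violates_k_SNM_alpha(rule, beats, players, k, alpha):
    """True iff some coalition of <= k teams gains more than alpha on this T."""
    return any(
        coalition_gain(rule, beats, players, S) > alpha + 1e-9
        for size in range(2, k + 1)
        for S in itertools.combinations(players, size)
    )
```

    Certifying that a rule is k-SNM-α requires ruling out violations on all tournaments T, which is what the paper's analytical arguments accomplish.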

    Approximation Schemes for a Unit-Demand Buyer with Independent Items via Symmetries

    We consider a revenue-maximizing seller with n items facing a single buyer. We introduce the notion of symmetric menu complexity of a mechanism, which counts the number of distinct options the buyer may purchase, up to permutations of the items. Our main result is that a mechanism of quasi-polynomial symmetric menu complexity suffices to guarantee a (1 − ε)-approximation when the buyer is unit-demand over independent items, even when the value distribution is unbounded, and that this mechanism can be found in quasi-polynomial time. Our key technical result is a polynomial-time, (symmetric) menu-complexity-preserving black-box reduction from achieving a (1 − ε)-approximation for unbounded valuations that are subadditive over independent items to achieving a (1 − O(ε))-approximation when the values are bounded (and still subadditive over independent items). We further apply this reduction to deduce approximation schemes for a suite of valuation classes beyond our main result. Finally, we show that selling separately (which has exponential menu complexity) can be approximated up to a (1 − ε) factor with a menu of efficient-linear (f(ε) · n) symmetric menu complexity.
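    "Selling separately" here means posting one price per item, and a unit-demand buyer purchases at most one item, the one maximizing value minus price. A minimal Monte Carlo sketch of that benchmark (the names and the uniform-values example are my own illustration, not the paper's construction):

```python
import random

def separate_sale_revenue(prices, sample_values, trials=100000, seed=0):
    """Estimated revenue of per-item pricing against a unit-demand buyer who
    buys the single item maximizing v_i - p_i (nothing if all surpluses < 0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        vals = sample_values(rng)  # one draw of independent item values
        best_item, best_surplus = None, 0.0
        for i, (v, p) in enumerate(zip(vals, prices)):
            if v - p > best_surplus:
                best_item, best_surplus = i, v - p
        if best_item is not None:
            total += prices[best_item]
    return total / trials

# e.g. two i.i.d. Uniform[0, 1] values, both items priced at 0.4
print(separate_sale_revenue([0.4, 0.4], lambda rng: [rng.random(), rng.random()]))
```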

    Coding in Undirected Graphs Is Either Very Helpful or Not Helpful at All

    While it is known that using network coding can significantly improve the throughput of directed networks, it is a notorious open problem whether coding yields any advantage over the multicommodity flow (MCF) rate in undirected networks. It was conjectured that the answer is no. In this paper we show that even a small advantage over MCF can be amplified to yield a near-maximum possible gap. We prove that any undirected network with k source-sink pairs that exhibits a (1+ε) gap between its MCF rate and its network coding rate can be used to construct a family of graphs G' whose gap is log(|G'|)^c for some constant c < 1. The resulting gap is close to the best currently known upper bound, log(|G'|), which follows from the connection between MCF and sparsest cuts. Our construction relies on a gap-amplifying graph tensor product that, given two graphs G_1, G_2 with small gaps, creates another graph G with a gap that is equal to the product of the previous two, at the cost of increasing the size of the graph. We iterate this process to obtain a gap of log(|G'|)^c from any initial gap.
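    To see how iterating a gap-multiplying product can turn a (1+ε) gap into a polylogarithmic one, here is a back-of-envelope calculation under assumed parameters (purely illustrative; the construction's actual size growth may differ): suppose one tensoring step squares the gap and also squares the logarithm of the graph size. Starting from size N_0 and gap 1+ε, after t steps

\[
g_t = (1+\varepsilon)^{2^t}, \qquad \log N_t = (\log N_0)^{2^t},
\]

so \(2^t = \frac{\log\log N_t}{\log\log N_0}\), and therefore

\[
g_t = (1+\varepsilon)^{\frac{\log\log N_t}{\log\log N_0}} = (\log N_t)^{c},
\qquad c = \frac{\log(1+\varepsilon)}{\log\log N_0} < 1,
\]

    with c < 1 whenever log(1+ε) < log log N_0, i.e., for any fixed small ε and large enough starting graph.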

    The Fewest Clues Problem

    When analyzing the computational complexity of well-known puzzles, most papers consider the algorithmic challenge of solving a given instance of (a generalized form of) the puzzle. We take a different approach by analyzing the computational complexity of designing a "good" puzzle. We assume a puzzle maker designs part of an instance, but before publishing it, wants to ensure that the puzzle has a unique solution. Given a puzzle, we introduce the FCP (fewest clues problem) version of the problem: Given an instance of a puzzle, what is the minimum number of clues we must add in order to make the instance uniquely solvable? We analyze this question for the Nikoli puzzles Sudoku, Shakashaka, and Akari. Solving these puzzles is NP-complete, and we show their FCP versions are Σ_2^P-complete. Along the way, we show that the FCP versions of 3SAT, 1-in-3SAT, Triangle Partition, Planar 3SAT, and Latin Square are all Σ_2^P-complete. We show that even problems in P have difficult FCP versions, sometimes even Σ_2^P-complete, though "closed under cluing" problems are in the (presumably) smaller class NP; for example, FCP 2SAT is NP-complete.
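    For intuition, here is a tiny brute-force sketch of the FCP version of CNF satisfiability (exponential time, illustration only; the list-of-literals encoding and the restriction of clues to unit clauses fixing a variable are my own choices):

```python
import itertools

def satisfying_assignments(clauses, n):
    """All assignments satisfying a CNF over variables 1..n.
    A clause is a list of nonzero ints; literal +i (-i) means x_i is True (False)."""
    return [bits for bits in itertools.product([False, True], repeat=n)
            if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)]

def fewest_clues(clauses, n):
    """Minimum number of unit-clause clues that make the instance uniquely
    solvable, or None if no clue set can (e.g. the formula is unsatisfiable)."""
    for t in range(n + 1):
        for vars_ in itertools.combinations(range(1, n + 1), t):
            for vals in itertools.product([False, True], repeat=t):
                clued = clauses + [[v if b else -v] for v, b in zip(vars_, vals)]
                if len(satisfying_assignments(clued, n)) == 1:
                    return t
    return None

# (x1 or x2) alone has three satisfying assignments; a single clue fixing one
# variable (e.g. x1 = False) leaves exactly one, so the answer is 1.
print(fewest_clues([[1, 2]], n=2))  # -> 1
```

    The quantifier pattern (there exists a small clue set such that every assignment but one fails) is exactly the structure that places FCP problems in Σ_2^P.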

    Circumventing Lower Bounds in Mechanism and Tournament Design

    How should a seller price multiple items in order to maximize their revenue from a single buyer? This question has been key in connecting research done by computer scientists and economists to applications such as online ad auctions and spectrum auctions. It was shown almost 40 years ago that if the buyer only cares for one item, it is optimal for the seller to offer the buyer the item at a take-it-or-leave-it price. This elegant solution, however, may be far from optimal even for the case of a single buyer interested in just two items. Optimal auction design for two or more items is intricate for a number of reasons: the auctions may be bizarre, computationally hard to find, or simply too complex to present to a bidder. Numerous results suggest that (under reasonable complexity assumptions) efficiently finding optimal auctions, even in some simple settings, is impossible. In this thesis, I present several ways in which we try to circumvent these impossibility results in order to recover positive results. These include the study of different multi-item models and beyond-worst-case analysis for mechanism design. I will also discuss recent advances in tournament design, a field at the intersection of mechanism design and social choice theory. For problems in this field we get around known impossibility results via approximations, in order to find approximately strategyproof tournament rules.
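    The single-item result mentioned above has a one-line computational analogue: the optimal take-it-or-leave-it price maximizes p · Pr[v ≥ p]. A minimal sketch against an empirical value distribution (the function name and example values are my own):

```python
def best_posted_price(values):
    """Optimal take-it-or-leave-it price against the empirical distribution
    of `values`: maximize p * Pr[v >= p]; some value is always an optimal price."""
    n = len(values)
    return max(sorted(set(values)),
               key=lambda p: p * sum(v >= p for v in values) / n)

# Buyer's value is 1, 2, or 3 with equal probability: price 2 earns 2 * (2/3).
print(best_posted_price([1, 2, 3]))  # -> 2
```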

    Fine-Grained Buy-Many Mechanisms Are Not Much Better Than Bundling

    Multi-item optimal mechanisms are known to be extremely complex, often offering buyers randomized lotteries of goods. In the standard buy-one model it is known that optimal mechanisms can yield revenue infinitely higher than that of any "simple" mechanism, even for the case of just two items and a single buyer. We introduce a new class of mechanisms, buy-k mechanisms, which smoothly interpolates between the classical buy-one mechanisms and buy-many mechanisms. Buy-k mechanisms allow the buyer to (non-adaptively) buy up to k many menu options. We show that restricting the seller to the class of buy-n mechanisms suffices to overcome the bizarre, infinite revenue properties of the buy-one model for the case of a single, additive buyer. The revenue gap with respect to bundling, an extremely simple mechanism, is bounded by O(n^3) for any arbitrarily correlated distribution D over n items. For the special case of n = 2, we show that the revenue-optimal buy-2 mechanism gets no better than 40 times the revenue from bundling. Our upper bounds also hold for the case of adaptive buyers. Finally, we show that allowing the buyer to purchase a small number of menu options does not suffice to guarantee sub-exponential approximations. If the buyer is only allowed to buy k = Θ(n^{1/2−ε}) many menu options, the gap between the revenue-optimal buy-k mechanism and bundling may be exponential in n. This implies that no "simple" mechanism can get a sub-exponential approximation in this regime. Moreover, our lower-bound instance, based on combinatorial designs and cover-free sets, uses a buy-k deterministic mechanism. This allows us to extend our lower bound to the case of adaptive buyers.
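    Bundling, the benchmark in these gap bounds, posts a single price for the grand bundle; an additive buyer pays it whenever her total value clears the price. A minimal Monte Carlo sketch (the names and the uniform-values demo are my own, not the paper's distributions):

```python
import random

def bundle_revenue(price, sample_values, trials=20000, seed=0):
    """Estimated revenue of grand-bundle pricing against an additive buyer,
    who pays `price` iff the sum of her item values is at least `price`."""
    rng = random.Random(seed)
    sold = sum(sum(sample_values(rng)) >= price for _ in range(trials))
    return price * sold / trials

# e.g. n = 2 i.i.d. Uniform[0, 1] items: scan a grid of candidate bundle prices
sample = lambda rng: [rng.random(), rng.random()]
best_price = max((p / 100 for p in range(1, 200)),
                 key=lambda p: bundle_revenue(p, sample))
```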
